7 research outputs found

    Robust artificial neural network for reliability analysis

    Artificial Neural Networks (ANNs) are used in place of expensive models to reduce the computational burden of reliability analysis. Often, an ANN with a selected architecture is trained with the back-propagation algorithm on a few data points representative of the input/output relationship of the underlying model of interest. However, ANNs with different performance might be obtained from the same training data, leading to uncertainty in selecting the best-performing ANN. On the other hand, using cross-validation to select the best-performing ANN on the basis of the highest R2 value can lead to bias in the prediction made by the selected ANN, because R2 cannot determine whether an ANN's prediction is biased. Additionally, R2 does not indicate whether a model is adequate: it is possible to have a low R2 for a good model and a high R2 for a bad one. Hence, we propose an approach to improve the prediction robustness of an ANN by coupling a Bayesian framework and a model-averaging technique into a unified scheme. The model uncertainties propagated to the robust prediction are quantified in terms of confidence intervals. Two examples are used to demonstrate the applicability of the approach.
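    The sketch below is a minimal illustration of the general idea described in the abstract, not the authors' implementation: several identically configured ANNs are trained on the same small data set (differing only in their random weight initialisation), each network is weighted by a BIC-based proxy for its Bayesian model evidence (an assumption made here for illustration), and a confidence interval is read off the weighted ensemble spread. All data, architectures, and parameter values are synthetic.

```python
# Minimal sketch (assumptions: synthetic data, BIC as a proxy for Bayesian
# model evidence). Not the paper's method, only the general averaging idea.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(80, 2))          # few training samples
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2      # stand-in for an expensive model
n = len(y)

nets, log_ev = [], []
for seed in range(10):                            # same data, different initial weights
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                       random_state=seed).fit(X, y)
    rss = float(np.sum((net.predict(X) - y) ** 2))
    k = sum(w.size for w in net.coefs_) + sum(b.size for b in net.intercepts_)
    bic = n * np.log(rss / n) + k * np.log(n)
    nets.append(net)
    log_ev.append(-0.5 * bic)                     # BIC approximation to log evidence

log_ev = np.asarray(log_ev)
w = np.exp(log_ev - log_ev.max())
w /= w.sum()                                      # posterior model weights

X_new = rng.uniform(-1.0, 1.0, size=(5, 2))
preds = np.array([net.predict(X_new) for net in nets])
mean = w @ preds                                  # model-averaged prediction
std = np.sqrt(w @ (preds - mean) ** 2)            # between-model spread
lower, upper = mean - 1.96 * std, mean + 1.96 * std  # approximate 95% interval
```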

    Uncertainty quantification methods for neural networks pattern recognition

    On-line monitoring techniques have attracted increasing attention as a promising strategy for improving safety, maintaining availability and reducing the cost of operation and maintenance. In particular, pattern recognition tools such as artificial neural networks are today widely adopted for sensor validation, plant component monitoring, system control, and fault diagnostics based on data acquired during operation. However, classic artificial neural networks do not provide a measure of the error associated with the model response, whose robustness is thus difficult to estimate. Indeed, experimental data generally exhibit time/space-varying behaviour and are hence characterized by an intrinsic level of uncertainty that unavoidably affects the performance of the tools adopted and undermines the accuracy of the analysis. For this reason, propagating the uncertainty and quantifying the so-called margins of uncertainty in the output are crucial for making risk-informed decisions. The current study presents a comparison between two different approaches for the quantification of uncertainty in artificial neural networks. The first technique is based on error estimation by a series association scheme; the second couples a Bayesian model selection technique and model averaging into a unified framework. The efficiency of these two approaches is analysed in terms of computational cost and predictive performance, through their application to a nuclear power plant fault diagnosis system.
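    Neither of the two approaches compared in the paper (series-association error estimation; Bayesian model selection plus model averaging) is reproduced below. The sketch only illustrates the underlying idea of attaching a margin of uncertainty to a neural pattern-recognition output, here via the disagreement of a small ensemble on an invented toy fault-diagnosis task; the data, labels, and parameters are all assumptions.

```python
# Minimal sketch: ensemble disagreement as an uncertainty margin on a
# classifier output. Toy data only; not either of the paper's two schemes.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                     # toy "sensor" readings
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # nominal (0) vs. faulty (1)

# Same architecture and data, different random initialisations.
ensemble = [MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000,
                          random_state=s).fit(X, y) for s in range(8)]

x_new = rng.normal(size=(1, 3))                   # new operating point
p = np.array([m.predict_proba(x_new)[0, 1] for m in ensemble])
print(f"P(fault) = {p.mean():.2f} +/- {p.std():.2f}")  # margin of uncertainty
```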

    Robust artificial neural network for reliability and sensitivity analyses of complex non-linear systems

    Artificial Neural Networks (ANNs) are commonly used in place of expensive models to reduce the computational burden required for uncertainty quantification, reliability and sensitivity analysis. An ANN with a selected architecture is trained with the back-propagation algorithm on a few data points representative of the input/output relationship of the underlying model of interest. However, ANNs with different performance might be obtained from the same training data as a result of the random initialization of the weight parameters in each network, leading to uncertainty in selecting the best-performing ANN. On the other hand, using cross-validation to select the ANN with the highest R2 value can lead to bias in the prediction, because R2 cannot determine whether the prediction made by an ANN is biased. Additionally, R2 does not indicate whether a model is adequate: it is possible to have a low R2 for a good model and a high R2 for a bad one. Hence, in this paper we propose an approach to improve the robustness of a prediction made by an ANN. The approach is based on a systematic combination of identically trained ANNs, coupling the Bayesian framework and model averaging. Additionally, the uncertainties of the robust prediction derived from the approach are quantified in terms of confidence intervals. To demonstrate the applicability of the proposed approach, two synthetic numerical examples are presented. Finally, the proposed approach is used to perform a reliability and sensitivity analysis on a process simulation model of a UK nuclear effluent treatment plant, developed by the National Nuclear Laboratory (NNL) and treated in this study as a black box, employing a set of training data as a test case. This model has been extensively validated against plant and experimental data and used to support the UK effluent discharge strategy.
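    A rough sketch of how such an ensemble surrogate could feed a reliability analysis follows; it covers only the reliability part (not sensitivity), and the limit state, failure threshold, and input distribution are invented stand-ins, not the NNL plant model. Each network in the ensemble yields its own Monte Carlo failure-probability estimate, and the spread of those estimates indicates the uncertainty inherited from the surrogate.

```python
# Minimal sketch (assumed limit state, threshold, and input distribution):
# propagate Monte Carlo samples through each ANN in an ensemble of surrogates
# and report the spread of the resulting failure-probability estimates.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 2))
y = X[:, 0] ** 2 + X[:, 1]                 # stand-in for the black-box model

nets = [MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                     random_state=s).fit(X, y) for s in range(10)]

X_mc = rng.normal(size=(20000, 2))         # Monte Carlo input samples
threshold = 3.0                            # assumed failure threshold
pf = np.array([(net.predict(X_mc) > threshold).mean() for net in nets])
print(f"Pf ~ {pf.mean():.4f} (ensemble range {pf.min():.4f} to {pf.max():.4f})")
```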

    Is "No test is better than a bad test"? Impact of diagnostic uncertainty in mass testing on the spread of Covid-19

    Testing is viewed as a critical aspect of any strategy to tackle epidemics. Much of the dialogue around testing has concentrated on how countries can scale up capacity, but the uncertainty in testing has not received nearly as much attention beyond asking whether a test is accurate enough to be used. Even for highly accurate tests, false positives and false negatives will accumulate as mass testing strategies are employed under pressure, and these misdiagnoses could have major implications for the ability of governments to suppress the virus. The present analysis uses a modified SIR model to understand the implication and magnitude of misdiagnosis in the context of ending lockdown measures. The results indicate that increased testing capacity alone will not provide a solution to lockdown measures. The progression of the epidemic and peak infections are shown to depend heavily on test characteristics, test targeting, and prevalence of the infection. Antibody-based immunity passports are rejected as a solution to ending lockdown, as they can put the population at risk if poorly targeted. Similarly, mass screening for active viral infection may only be beneficial if it can be sufficiently well targeted; otherwise, reliance on this approach for protection of the population can again put it at risk. A well-targeted active viral test combined with a slow release rate is a viable strategy for continuous suppression of the virus.
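    The paper's calibrated model and parameter values are not reproduced here; the sketch below is a toy SIR variant under assumed rates, showing only the mechanism the abstract describes: imperfect specificity quarantines healthy susceptibles (false positives), while imperfect sensitivity leaves some infections circulating (false negatives). A quarantined compartment Q absorbs both true and false positives and, for simplicity, never returns to circulation.

```python
# Minimal sketch: forward-Euler integration of an SIR variant with mass
# testing. All parameter values below are assumptions for illustration.
beta, gamma = 0.3, 0.1        # transmission and recovery rates (assumed)
tau = 0.05                    # fraction of population tested per day (assumed)
sens, spec = 0.90, 0.95       # test sensitivity and specificity (assumed)
S, I, R, Q = 0.99, 0.01, 0.0, 0.0   # initial population fractions
dt, days = 0.1, 180

for _ in range(int(days / dt)):
    new_inf = beta * S * I               # new infections
    tp = tau * sens * I                  # true positives: infected, quarantined
    fp = tau * (1.0 - spec) * S          # false positives: susceptibles quarantined
    dS = -new_inf - fp
    dI = new_inf - gamma * I - tp        # false negatives remain in I
    dR = gamma * I
    dQ = tp + fp
    S, I, R, Q = S + dt * dS, I + dt * dI, R + dt * dR, Q + dt * dQ

print(f"final recovered fraction ~ {R:.3f}, quarantined ~ {Q:.3f}")
```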
